Question about onnx graph generation for differentiable QPLayer #390


Open
gulshan216 opened this issue Apr 2, 2025 · 3 comments

@gulshan216

Hi, first of all, I'm super excited about the differentiable QPLayer capability for neural networks. My main question is whether the QPLayer is written in a way that supports conversion to an ONNX graph, i.e., does it avoid dynamic control flow so that the solver iterations can be unrolled easily?

@jcarpent
Member

jcarpent commented Apr 5, 2025

> Basically avoids dynamic control flow, so that the solver iterations can be unrolled easily?

Why do you want to unroll the solver iterations if you can compute the forward and backward derivatives analytically?
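
For context, this is what makes unrolling unnecessary: layers in the OptNet/QPLayer family differentiate the QP's optimality (KKT) conditions, so the backward pass is one extra linear solve rather than a replay of solver iterations. Below is a minimal sketch of that idea for an equality-constrained toy QP; the names and the simplified formulation are mine, not the actual QPLayer implementation.

    import torch

    class EqQPFunction(torch.autograd.Function):
        """Toy layer: solve min_x 0.5*x'Qx + p'x  s.t.  Ax = b."""

        @staticmethod
        def forward(ctx, Q, p, A, b):
            n, m = Q.shape[0], A.shape[0]
            # One linear KKT solve; no iterative loop enters the graph.
            K = torch.zeros(n + m, n + m, dtype=Q.dtype, device=Q.device)
            K[:n, :n], K[:n, n:], K[n:, :n] = Q, A.t(), A
            sol = torch.linalg.solve(K, torch.cat([-p, b]))
            x, nu = sol[:n], sol[n:]
            ctx.save_for_backward(K, x, nu)
            return x

        @staticmethod
        def backward(ctx, grad_x):
            K, x, nu = ctx.saved_tensors
            n = x.shape[0]
            # Implicit differentiation of the KKT conditions:
            # one more solve with the (transposed) KKT matrix.
            d = torch.linalg.solve(K.t(), torch.cat([-grad_x, torch.zeros_like(nu)]))
            dx, dnu = d[:n], d[n:]
            dQ = torch.outer(dx, x)  # symmetrize for symmetric Q if preferred
            dA = torch.outer(dnu, x) + torch.outer(nu, dx)
            return dQ, dx, dA, -dnu

    # Sanity check of the analytic backward pass
    Q = torch.eye(3, dtype=torch.float64).mul(2.0).requires_grad_()
    p = torch.randn(3, dtype=torch.float64, requires_grad=True)
    A = torch.randn(1, 3, dtype=torch.float64, requires_grad=True)
    b = torch.randn(1, dtype=torch.float64, requires_grad=True)
    assert torch.autograd.gradcheck(EqQPFunction.apply, (Q, p, A, b))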

@gulshan216
Author

My main question is whether a model containing a QPLayer would work with ONNX. We need to be able to convert the model to ONNX format, primarily for inference. I tried exporting the model in the examples/python/qplayer_sudoku.py example to ONNX by adding the snippet of code below, but the exported model's onnx_model.graph.input is always empty.

My initial intuition was that QPLayer may not support ONNX conversion, since ONNX is a static graph representation that would require the underlying QP solver iterations to be unrolled, but I may be wrong?

    import torch.onnx
    import onnx

    model.eval()
    # Create a dummy input (batch size 1)
    dummy_input = torch.randn(1, trainX.size(1), trainX.size(2), trainX.size(3))

    onnx_filename = "qplayer_sudoku.onnx"

    # Export the model
    torch.onnx.export(
        model,               # The PyTorch model
        dummy_input,         # The dummy input to trace the model
        onnx_filename,       # The output ONNX file path
        export_params=True,  # Export the model parameters (weights)
        opset_version=12,    # ONNX opset version (choose an appropriate version)
        do_constant_folding=True,  # Apply optimizations (constant folding)
        input_names=['input'],     # Input name (useful for later inference)
        output_names=['output'],   # Output name
    )

    # Load the exported ONNX model and inspect its input nodes
    onnx_model = onnx.load(onnx_filename)
    for input_tensor in onnx_model.graph.input:
        print(f"Input Name: {input_tensor.name}")
        print(f"Input Type: {input_tensor.type}")
        for dim in input_tensor.type.tensor_type.shape.dim:
            print(f"Dimension: {dim.dim_value}")
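
My current guess at why graph.input comes back empty (speculation on my part): the TorchScript-based exporter only records traced tensor operations, so if QPLayer hands its inputs to the C++ solver outside of traced ops, the tracer sees the layer's output as a constant and the graph inputs get pruned away. One workaround I'm considering, assuming QPLayer is implemented as a torch.autograd.Function, is PyTorch's static symbolic hook, which exports the function as a single custom ONNX node that the inference runtime must then implement. A rough sketch with hypothetical op and domain names:

    import torch

    class QPSolveOp(torch.autograd.Function):
        """Hypothetical wrapper; the real QPLayer calls the QP solver here."""

        @staticmethod
        def forward(ctx, H, g_vec):
            # Stand-in for the QP solve (unconstrained minimum of
            # 0.5*x'Hx + g'x) so the sketch runs end to end.
            return torch.linalg.solve(H, -g_vec)

        @staticmethod
        def symbolic(graph, H, g_vec):
            # Emit one node in a custom ONNX domain instead of letting the
            # tracer constant-fold through the solver call. A matching
            # custom op must be registered with the inference runtime.
            return graph.op("proxsuite.custom::QPSolve", H, g_vec)

The torch.onnx.export call would then also pass custom_opsets={"proxsuite.custom": 1} so the exporter knows the custom domain's version. Whether this fits QPLayer's actual internals is something the maintainers would have to confirm.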

@gulshan216
Author

gulshan216 commented Apr 7, 2025

If QPLayer doesn't support ONNX conversion, my less-than-ideal alternative would be to skip the QPLayer during the ONNX graph conversion by splitting the model into two sub-graphs, and to replace the QPLayer at inference time with the same formulation (and the same solver settings) running through the proxqp C++ interface. But the manual burden of keeping the Python QP formulation and the C++ proxqp formulation in sync would be bad. I'd really appreciate your input here!
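
To make that alternative concrete, here is roughly the inference-time glue I have in mind, using proxsuite's Python bindings as a stand-in for the C++ interface; the sub-graph file names, input names, and QP data shapes are all hypothetical:

    import numpy as np
    import onnxruntime as ort
    import proxsuite

    # Hypothetical sub-graphs exported around the QPLayer
    pre = ort.InferenceSession("model_pre_qp.onnx")
    post = ort.InferenceSession("model_post_qp.onnx")

    def run(x, H, A, b, C, l, u):
        # First sub-graph produces the QP's linear cost term
        g = pre.run(None, {"input": x})[0].astype(np.float64).ravel()

        # Same formulation and solver settings as the training-time QPLayer
        qp = proxsuite.proxqp.dense.QP(H.shape[0], A.shape[0], C.shape[0])
        qp.init(H, g, A, b, C, l, u)
        qp.solve()

        # Second sub-graph consumes the primal solution
        z = qp.results.x.astype(np.float32)[None, :]
        return post.run(None, {"input": z})[0]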
